We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to its use of discrete tokens and its need for fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to its use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality, etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
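The parallel decoding that makes Muse faster than autoregressive samplers can be sketched in a few lines. The toy below is a minimal, hypothetical illustration (the random "proposals" stand in for the Transformer's predictions, and the cosine unmasking schedule is an assumption of this sketch): all positions start masked, and each step keeps only the most confident predictions, filling a 16-token sequence in 4 parallel steps rather than 16 sequential ones.

```python
import math
import random

def parallel_decode(seq_len=16, vocab=8, steps=4, seed=0):
    """Toy sketch of iterative parallel decoding over discrete image tokens.

    All positions start MASKED; at each step a stand-in 'model' proposes a
    token and a confidence for every masked position, and we keep the most
    confident ones according to a cosine masking schedule."""
    rng = random.Random(seed)
    MASK = -1
    tokens = [MASK] * seq_len
    history = []  # number of committed tokens after each step
    for t in range(1, steps + 1):
        # Fraction of positions that should remain masked after this step.
        keep_masked = math.cos(math.pi / 2 * t / steps)
        n_unmask_total = seq_len - int(round(seq_len * keep_masked))
        masked = [i for i, tok in enumerate(tokens) if tok == MASK]
        # Stand-in for the Transformer: random proposals + confidences.
        proposals = {i: (rng.randrange(vocab), rng.random()) for i in masked}
        already = seq_len - len(masked)
        n_new = max(0, n_unmask_total - already)
        for i in sorted(masked, key=lambda i: -proposals[i][1])[:n_new]:
            tokens[i] = proposals[i][0]
        history.append(sum(tok != MASK for tok in tokens))
    return tokens, history
```

With these illustrative settings the schedule commits 1, then 5, then 10, then all 16 tokens, so the whole sequence is produced in 4 forward passes.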
We propose the Detailed Outline Control (DOC) framework for improving long-range plot coherence when automatically generating several-thousand-word-long stories. DOC consists of two complementary components: a detailed outliner and a detailed controller. The detailed outliner creates a more detailed, hierarchically structured outline, shifting creative burden from the main drafting procedure to the planning stage. The detailed controller ensures the more detailed outline is still respected during generation by controlling story passages to align with outline details. In human evaluations of automatically generated stories, DOC substantially outperforms a strong Re3 baseline (Yang et al., 2022) on plot coherence (22.5% absolute gain), outline relevance (28.2%), and interestingness (20.7%). Humans also judged DOC to be much more controllable in an interactive generation setting.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
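The most common workaround the survey found for oversized samples, patch-based training (69% of respondents), amounts to tiling each large image into fixed-size windows before feeding them to the model. A minimal sketch (sizes and the row-of-lists image representation are illustrative, not from the survey):

```python
def extract_patches(image, patch, stride):
    """Split a 2-D image (a list of rows) into patches of size patch x patch,
    sliding a window by `stride` pixels; overlapping patches when
    stride < patch, non-overlapping tiles when stride == patch."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            patches.append([row[left:left + patch]
                            for row in image[top:top + patch]])
    return patches

# An 8x8 "image" tiled into four non-overlapping 4x4 patches.
image = [[r * 8 + c for c in range(8)] for r in range(8)]
patches = extract_patches(image, patch=4, stride=4)
```

In practice each patch would be a training sample in its own right, with predictions stitched back together at inference time.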
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
Federated learning (FL) enables the building of robust and generalizable AI models by leveraging diverse datasets from multiple collaborators without centralizing the data. We created NVIDIA FLARE as an open-source software development kit (SDK) to make it easier for data scientists to use FL in their research and real-world applications. The SDK includes solutions for state-of-the-art FL algorithms and federated machine learning approaches, which facilitate building workflows for distributed learning across enterprises and enable platform developers to create a secure, privacy-preserving offering for multiparty collaboration utilizing homomorphic encryption or differential privacy. The SDK is a lightweight, flexible, and scalable Python package, and allows researchers to bring their data science workflows implemented in any training libraries (PyTorch, TensorFlow, XGBoost, or even NumPy) and apply them in real-world FL settings. This paper introduces the key design principles of FLARE and illustrates some use cases (e.g., COVID analysis) with customizable FL workflows that implement different privacy-preserving algorithms. Code is available at https://github.com/NVIDIA/NVFlare.
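The core FL pattern FLARE packages up (federated averaging: broadcast the global model, train locally on private data, average the returned weights) can be sketched without any SDK at all. This is a generic, hypothetical illustration of the concept, not the NVFlare API; the 1-D linear model stands in for an arbitrary training library.

```python
import random

def local_train(w, data, lr=0.1, epochs=5):
    """One client's local step: gradient descent on the squared error of a
    1-D linear model y ~= w*x over that client's private (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(client_datasets, rounds=10):
    """Minimal FedAvg sketch: the server broadcasts the global weight, each
    client trains locally, and the server averages the results. Raw data
    never leaves the clients; only model weights are exchanged."""
    global_w = 0.0
    for _ in range(rounds):
        client_ws = [local_train(global_w, data) for data in client_datasets]
        global_w = sum(client_ws) / len(client_ws)
    return global_w

# Two clients whose private data both follow y = 3x plus site-specific noise.
rng = random.Random(0)
clients = [[(x, 3 * x + rng.uniform(-0.1, 0.1)) for x in (1, 2, 3)]
           for _ in range(2)]
w = federated_average(clients)  # converges near the shared slope 3
```

Privacy-preserving additions such as homomorphic encryption or differential privacy would wrap the weight exchange in this loop.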
Masked visual modeling (MVM) has recently proven effective for visual pre-training. While similar reconstruction objectives (e.g., masked frame modeling) have been explored for video inputs in video-language (VidL) pre-training, the pre-extracted video features used in prior work cannot be refined through MVM during pre-training, leading to unsatisfactory downstream performance. In this work, we systematically examine the potential of MVM in the context of VidL learning. Specifically, our study is based on a fully end-to-end video Transformer (VIOLET), which mitigates the disconnect between fixed video representations and MVM training. In total, eight different reconstruction targets for MVM are explored, from low-level pixel values and oriented gradients to high-level depth maps, optical flow, discrete visual tokens, and latent visual features. We conduct comprehensive experiments and provide insights into the factors that lead to effective MVM training. Empirically, we show that VIOLET pre-trained with MVM objectives achieves notable improvements on 13 VidL benchmarks, ranging from video question answering and video captioning to text-to-video retrieval.
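Whatever the reconstruction target (pixels, gradients, depth, flow, tokens, or latent features), masked visual modeling reduces to the same loss shape: mask a random subset of positions and score the model only where it could not see the input. A minimal, hypothetical sketch with a squared-error loss on low-level targets:

```python
import random

def random_mask(n, ratio, rng):
    """Choose a random subset of the n patch positions to mask."""
    chosen = set(rng.sample(range(n), int(n * ratio)))
    return [i in chosen for i in range(n)]

def mvm_loss(targets, predictions, mask):
    """Masked visual modeling in miniature: reconstruction error is computed
    only over the masked positions (here a squared error against low-level
    targets such as pixel values, one of the eight target types studied)."""
    errors = [(p - t) ** 2 for t, p, m in zip(targets, predictions, mask) if m]
    return sum(errors) / len(errors)
```

Swapping the target list (e.g., depth values for pixel values) changes the objective without changing this structure, which is what makes a systematic eight-target comparison tractable.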
Estimates of treatment effects are often biased in the presence of unobserved confounding variables, commonly referred to as hidden variables. Although several methods have recently been proposed to handle the effects of hidden variables, they typically ignore the possibility of any interaction between the observed treatment variables and the unobserved covariates. In this work, we address this shortcoming by studying a multivariate response regression problem $y = A^\top x + \sum_{j=1}^{p} x_j B_j z + e$, where $y \in \mathbb{R}^m$ is an $m$-dimensional response variable, $x \in \mathbb{R}^p$ are the observed covariates (including the treatment variables), $z \in \mathbb{R}^k$ are the $k$-dimensional unobserved confounders, and $e \in \mathbb{R}^m$ is random noise. Allowing interactions between $x_j$ and $z$ induces heterogeneous confounding effects. Our goal is to estimate the unknown matrix $A$, the direct effect of the observed covariates or treatments on the responses. To this end, we propose a new debiased estimation approach via SVD to remove the effect of the unobserved confounding variables. Convergence rates of the estimator are established under both homoscedastic and heteroscedastic noise. We also present simulation experiments and a real-world data application to corroborate our findings.
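Why the interaction term matters can be seen in a scalar toy version of this setup ($m = p = k = 1$). The sketch below is purely illustrative (all constants are assumptions, and it demonstrates only the bias the paper sets out to remove, not the SVD-based debiasing itself): when a skewed unobserved confounder $z$ both correlates with $x$ and interacts with it, the naive regression slope of $y$ on $x$ drifts far from the direct effect $a$.

```python
import math
import random

def naive_slope(n=20000, a=2.0, b=1.5, rho=0.8, seed=0):
    """Simulate y = a*x + b*x*z + e with an unobserved, skewed confounder z
    correlated with x, and return the naive OLS slope of y on x."""
    rng = random.Random(seed)
    sxx = sxy = 0.0
    s = math.sqrt(1 - rho ** 2)
    for _ in range(n):
        g = rng.gauss(0, 1)
        z = (g * g - 1) / math.sqrt(2)      # mean 0, variance 1, skewed
        x = rho * z + s * rng.gauss(0, 1)   # corr(x, z) = rho
        y = a * x + b * x * z + rng.gauss(0, 0.1)
        sxx += x * x
        sxy += x * y
    return sxy / sxx
```

With the interaction switched off (`b=0`) the naive slope recovers the direct effect `a = 2`; with `b = 1.5` it is inflated by roughly `b * rho**2 * E[z**3]`, which is what a debiasing procedure must subtract.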
Pathological diagnosis relies on the visual inspection of histologically stained thin tissue samples, where different types of stains are applied to contrast and highlight various histological features of interest. However, the destructive histochemical staining procedures are usually irreversible, making it difficult to obtain multiple stains on the same tissue section. Here, we demonstrate a virtual stain transfer framework via a cascaded deep neural network (C-DNN) to digitally transform hematoxylin and eosin (H&E) stained tissue images into other types of histological stains. Unlike a single neural network structure, which only takes one stain type as input to output a digital image of another stain type, the C-DNN first uses virtual staining to transform autofluorescence microscopy images into H&E, and then performs stain transfer from H&E to another stain domain in a cascaded manner. This cascaded structure in the training phase allows the model to directly exploit histochemically stained image data of both H&E and the target special stain. This advantage alleviates the challenge of paired data acquisition and improves the image quality and color accuracy of virtual stain transfer from H&E to another stain. We validated the superior performance of this C-DNN approach using kidney needle core biopsy tissue sections, successfully transferring H&E-stained tissue images into virtual PAS (periodic acid-Schiff) stain. This method provides high-quality virtual images of special stains using existing, histochemically stained slides, and creates new opportunities in digital pathology by performing highly accurate stain-to-stain transformations.
Humans have the remarkable ability to recognize and acquire novel visual concepts in a zero-shot manner. Given a high-level, symbolic description of previously learned visual concepts and their relations, humans can recognize novel concepts without seeing any examples. Moreover, they can acquire new concepts by parsing and communicating symbolic structures built from learned visual concepts and relations. Endowing machines with these capabilities is crucial for improving their generalization ability at inference time. In this work, we introduce Zero-shot Concept Recognition and Acquisition (ZeroC), a neurosymbolic architecture that can recognize and acquire novel concepts in a zero-shot way. ZeroC represents concepts as graphs of constituent concept models (as nodes) and their relations (as edges). To allow inference-time composition, we employ energy-based models (EBMs) to model concepts and relations. We design the ZeroC architecture so that it allows a one-to-one mapping between the symbolic graph structure of a concept and its corresponding EBM, which for the first time allows acquiring new concepts, communicating their graph structure, and applying them to classification and detection tasks (even across domains) at inference time. We introduce algorithms for learning and inference with ZeroC. We evaluate ZeroC on a challenging grid-world dataset designed to probe zero-shot concept recognition and acquisition, and demonstrate its capability.
Over the past decades, considerable attention has been paid to bio-inspired intelligence and its applications to robotics. This paper presents a comprehensive survey of bio-inspired intelligence, focusing on neurodynamics approaches, in particular for path planning and control of autonomous robotic systems. First, the bio-inspired shunting model and its variants (the additive model and the gated dipole model) are introduced, and their main characteristics are described in detail. Then, two main neurodynamics applications are reviewed: real-time path planning and control of various robotic systems. A bio-inspired neural network framework built on neurodynamics models has been applied to mobile robots, cleaning robots, and underwater robots. Bio-inspired neural networks have been widely used for collision-free navigation and cooperation, without any learning procedures, global cost functions, or prior knowledge of the dynamic environment. In addition, bio-inspired backstepping controllers for various robotic systems, which are able to eliminate the velocity jumps that occur in the presence of large initial tracking errors, are further discussed. Finally, current challenges and future research directions are discussed.
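The shunting model at the heart of these neurodynamics approaches can be illustrated with a few lines of Euler integration. The sketch below (parameter values are illustrative, not from any surveyed system) shows the model's defining property: activity stays bounded between the lower and upper saturation limits no matter how large the input is, which is why controllers built on it avoid velocity jumps.

```python
def shunting_step(x, excite, inhibit, A=10.0, B=1.0, D=1.0, dt=0.005):
    """One Euler step of the shunting equation
        dx/dt = -A*x + (B - x)*excite - (D + x)*inhibit,
    where A is the passive decay rate and B and -D are the upper and lower
    bounds of the neural activity x."""
    return x + dt * (-A * x + (B - x) * excite - (D + x) * inhibit)

# Drive a neuron with a very large excitatory input: activity rises smoothly
# and saturates below the upper bound B instead of jumping.
x = 0.0
trace = []
for _ in range(1000):
    x = shunting_step(x, excite=100.0, inhibit=0.0)
    trace.append(x)
```

The steady state here is `B * excite / (A + excite)`, approaching but never exceeding `B`; in a path-planning network, such bounded activities propagate attraction from the target and repulsion from obstacles.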